-
Why are you starting this discussion? Bug
What GitHub Actions topic or product is this about? Actions Cache
Discussion Details
I'm seeing these warnings in the FE Build workflow of the https://github.com/AmazeeLabs/silverback-template repo:
For each warning the timeout is about 5 minutes, which slows down the workflow significantly. I checked other workflows/projects using the same workflow step, and everything looks fine there, so I guess it's an issue with a particular cache node attached to this repo. Here's an example workflow run where the issue exists: https://github.com/AmazeeLabs/silverback-template/actions/runs/15268252233
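For reference, the step type involved (per the maintainer's reply further down this thread) is the Devbox installer. Below is a minimal sketch of such a job; it is an illustration only, not the repo's actual workflow, and the version tag, inputs, and build command are assumptions.

```yaml
name: FE Build (sketch)
on: push
jobs:
  fe-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Version tag and the enable-cache input are assumptions, not taken
      # from the silverback-template repo.
      - uses: jetify-com/devbox-install-action@v0.11.0
        with:
          enable-cache: "true"
      # Placeholder build command; the actual FE Build steps are not shown above.
      - run: devbox run build
```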
-
We also have a problem with caching. In our workflow we install the dependencies with Composer, but it is now much slower than before, and cache errors can be seen in the logs. It seems the cache service degradation that happened yesterday is still not completely resolved.
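For context, a common shape for the Composer caching being described is sketched below. This is a generic pattern, not the commenter's actual workflow; the paths and keys are illustrative.

```yaml
# Generic Composer caching pattern with actions/cache; paths and keys are
# illustrative, not taken from the commenter's workflow.
- name: Get Composer cache directory
  id: composer-cache
  run: echo "dir=$(composer config cache-files-dir)" >> "$GITHUB_OUTPUT"

- name: Cache Composer downloads
  uses: actions/cache@v4
  with:
    path: ${{ steps.composer-cache.outputs.dir }}
    key: composer-${{ hashFiles('**/composer.lock') }}
    restore-keys: composer-

- name: Install dependencies
  run: composer install --prefer-dist --no-progress
```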
-
We're also seeing what appears to be some flavor of this issue, but on a different IP. Something appears to be wrong with this IP:
-
Same issue here: Warning: Failed to restore: getCacheEntry failed: Request timeout
-
Same problem here, the same IP as rjtferreira.
-
Same here. Just noticed it.
-
In our case, the workflows that use this action were failing.
-
In my case, it was fixed by updating to
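The exact version the commenter updated to is not shown above. As an illustration only, pinning the cache action to its current major (which talks to the newer cache backend) looks like this; the path and key are placeholders.

```yaml
# Illustration only: actions/cache@v4 (v4.2.0 and later) uses the newer
# cache backend. Path and key below are placeholders.
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ hashFiles('**/package-lock.json') }}
```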
-
@Leksat in your
-
Having the same issue. The longer builds do allow for more coffee breaks, though.
-
Having the same issue:
-
Confirmed: we had major issues with caching yesterday. Because we do a dependency install as the first step of most of our workflows, a deploy that normally takes 10 minutes took 60 minutes yesterday. From our perspective the GH artifact cache is down. Is anyone tracking this internally?
-
Same issue in our repo
-
Hi @Leksat,
I'm one of the engineers who works on this service. The tl;dr is that we shut down v3 of the service recently, but only proceeded to terminate the resources hosting the service 2 days ago. This means that what was previously failing quietly (since April) has started failing a bit more loudly, but only for clients attempting to use that older service.
From what I can tell, you're right that the annotations are being produced in jetify-com/devbox-install-action, but more specifically it appears that these annotations are coming from the nix-installer-action, and from what I see, that package specifically does attempt to interact with the decommissioned cache service. This may mean that action is running a bit slower as it will not be making use of the cache service, but otherwise the only side effect is the annotation you are seeing.
For others seeing this message, please ensure that your packages, and the dependencies of those packages, are all using an up-to-date version of the cache service.
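One general way to keep workflow actions (and the cache-service clients they bundle) up to date is Dependabot's github-actions ecosystem. This is a suggestion added for context, not something prescribed in the reply above.

```yaml
# .github/dependabot.yml — keeps the action versions referenced in workflows
# current, which also pulls in newer cache-service clients bundled with them.
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```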
-
@GhadimiR I don't know if that helps, but I don't think this is the issue. In our setup, we run some actions on hosted runners and some on our self-hosted runners. We are using the
But on our self-hosted runners it tries to retrieve them from that box, which is not routable. This is not a problem in the self-hosted runner VM, as this IP is not routable from any outside networks either (I know we are not restricting access to IPs for our runners):
These logs are from builds of the same SHA, just a few minutes apart. Our runner version on these self-hosted runners is 2.317.0 and the OS is Ubuntu 22. What is going wrong?
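A debugging step along these lines can confirm whether the endpoint is reachable from the self-hosted runner's network. This is hypothetical: the host/IP from the runner's warning is not shown in the comment above, so it appears here only as a placeholder.

```yaml
# Hypothetical debug step: substitute the host or IP reported in the cache
# warning (elided from this comment) to test reachability from the
# self-hosted runner's network.
- name: Check cache endpoint reachability
  run: |
    CACHE_HOST="<host-or-ip-from-the-warning>"  # placeholder
    curl -sv --connect-timeout 10 "https://${CACHE_HOST}/" || true
```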
-
Bumping to
-
Using the following workflow addition before whatever uses the cache seemed to get ACTIONS_CACHE_SERVICE_V2 working:
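The snippet itself is not reproduced above. Based on the flag name mentioned, a plausible form is a workflow- or job-level environment variable; treat this as an assumption, not the commenter's verified workaround.

```yaml
# Plausible reconstruction only — the commenter's actual snippet is not
# shown above. Setting this environment variable opts the workflow into
# the v2 cache service.
env:
  ACTIONS_CACHE_SERVICE_V2: "true"
```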